The rapid development of representation learning techniques and the availability of large-scale medical imaging data have driven a rapid increase in the use of machine learning for 3D medical image analysis. In particular, deep convolutional neural networks (D-CNNs) have become key players and have been adopted by the medical imaging community to assist clinicians and medical experts in disease diagnosis. However, training deep neural networks such as D-CNNs on high-resolution 3D volumes of computed tomography (CT) scans for diagnostic tasks poses formidable computational challenges. This motivates the development of deep learning-based approaches that learn powerful representations from 2D images instead of 3D scans. In this paper, we propose a new strategy to train \emph{slice-level} classifiers on CT scans based on descriptors of the adjacent slices along the axis; in particular, the descriptor of each slice is extracted by a convolutional neural network (CNN). The method is applicable to CT datasets with per-slice labels, such as the RSNA Intracranial Hemorrhage (ICH) dataset, whose goal is to predict the presence of ICH and classify it into 5 different subtypes. With a single model, we rank among the top 4% best-performing solutions of the RSNA ICH challenge, in which model ensembling is allowed. Experiments also show that the proposed method significantly outperforms the baseline model on CQ500. The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI imaging. To encourage new advances in the field, we will make our code and pre-trained model available upon acceptance of the paper.
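As a rough illustration of the slice-level strategy (combining CNN descriptors of a slice and its axial neighbors), here is a minimal PyTorch sketch. The ResNet-18 backbone, the single-neighbor window, and the six-output head (ICH presence plus five subtypes) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AdjacentSliceClassifier(nn.Module):
    """Slice-level classifier over CNN descriptors of a slice and its axial neighbors."""
    def __init__(self, num_classes=6, num_neighbors=1, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                          # keep the 512-d descriptor
        self.backbone = backbone
        self.head = nn.Linear(feat_dim * (2 * num_neighbors + 1), num_classes)

    def forward(self, slices):
        # slices: (batch, 2*num_neighbors+1, 3, H, W) -- the target slice and its neighbors
        b, s, c, h, w = slices.shape
        feats = self.backbone(slices.view(b * s, c, h, w))   # one descriptor per slice
        return self.head(feats.view(b, -1))                  # classify the concatenation

model = AdjacentSliceClassifier()
logits = model(torch.randn(2, 3, 3, 224, 224))  # previous / current / next slice
print(logits.shape)                             # torch.Size([2, 6])
```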
Advanced deep learning (DL) algorithms can predict a patient's risk of developing breast cancer based on the Breast Imaging Reporting and Data System (BI-RADS) and density standards. Recent studies have shown that combining multi-view analysis improves overall breast-exam classification. In this paper, we propose a novel multi-view DL approach for BI-RADS and density assessment of mammograms. The proposed approach first deploys deep convolutional networks to extract features from each view separately. The extracted features are then stacked and fed into a Light Gradient Boosting Machine (LightGBM) classifier to predict the BI-RADS and density scores. We conduct extensive experiments on both an in-house mammography dataset and the public Digital Database for Screening Mammography (DDSM). Experimental results show that the proposed approach outperforms the two benchmark classification methods by a large margin on both datasets (5% on the in-house dataset and 10% on DDSM). These results highlight the important role of combining multi-view information in improving breast cancer risk prediction.
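A hedged sketch of the two-stage pipeline described above: a CNN extracts one descriptor per mammographic view, the descriptors are stacked, and a LightGBM classifier predicts the score. The ResNet-34 backbone, the view names, the feature size, and the hyperparameters are assumptions for illustration only.

```python
import numpy as np
import torch
import torchvision.models as models
import lightgbm as lgb

backbone = models.resnet34(weights=None)
backbone.fc = torch.nn.Identity()              # 512-d descriptor per view
backbone.eval()

def extract_exam_features(views):
    """views: list of four (1, 3, H, W) tensors, e.g. L-CC, L-MLO, R-CC, R-MLO."""
    with torch.no_grad():
        feats = [backbone(v).squeeze(0).numpy() for v in views]
    return np.concatenate(feats)               # stacked multi-view descriptor

exam = [torch.randn(1, 3, 224, 224) for _ in range(4)]   # four dummy views
print(extract_exam_features(exam).shape)                  # (2048,)

# Toy features and BI-RADS labels standing in for real extracted exams.
X = np.random.randn(100, 4 * 512)
y = np.random.randint(0, 5, size=100)
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X, y)
print(clf.predict(X[:5]))
```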
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
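As a concrete illustration of the most commonly reported strategy for oversized samples, patch-based training, the following sketch extracts overlapping 3D patches from a large volume; the patch size and stride are arbitrary example values.

```python
import numpy as np

def extract_patches(volume, patch_size=(64, 64, 64), stride=(32, 32, 32)):
    """Yield overlapping 3D patches from a volume that is too large to process at once."""
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    sd, sh, sw = stride
    for z in range(0, D - pd + 1, sd):
        for y in range(0, H - ph + 1, sh):
            for x in range(0, W - pw + 1, sw):
                yield volume[z:z + pd, y:y + ph, x:x + pw]

volume = np.random.rand(128, 256, 256)    # e.g., a CT volume
patches = list(extract_patches(volume))
print(len(patches), patches[0].shape)
```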
Manually analyzing spermatozoa is a tremendous task for biologists due to the many fast-moving spermatozoa, causing inconsistencies in the quality of the assessments. Therefore, computer-assisted sperm analysis (CASA) has become a popular solution. Despite this, more data is needed to train supervised machine learning approaches in order to improve accuracy and reliability. In this regard, we provide a dataset called VISEM-Tracking with 20 video recordings of 30s of spermatozoa with manually annotated bounding-box coordinates and a set of sperm characteristics analyzed by experts in the domain. VISEM-Tracking is an extension of the previously published VISEM dataset. In addition to the annotated data, we provide unlabeled video clips for easy-to-use access and analysis of the data. As part of this paper, we present baseline sperm detection performances using the YOLOv5 deep learning model trained on the VISEM-Tracking dataset. As a result, the dataset can be used to train complex deep-learning models to analyze spermatozoa. The dataset is publicly available at https://zenodo.org/record/7293726.
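A minimal sketch of running an off-the-shelf YOLOv5 detector on a single extracted video frame, in the spirit of the baseline above. It loads the generic COCO-pretrained weights via torch.hub; the VISEM-Tracking-trained weights and the frame path used below are assumptions, not released artifacts.

```python
import torch

# Load the small YOLOv5 variant with its generic pretrained weights.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('frame_0001.jpg')   # hypothetical path to a frame extracted from a video
results.print()                     # class, confidence, and bounding-box summary
boxes = results.xyxy[0]             # (N, 6) tensor: x1, y1, x2, y2, confidence, class
print(boxes)
```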
Predictive simulations of the shock-to-detonation transition (SDT) in heterogeneous energetic materials (EM) are vital to the design and control of their energy release and sensitivity. Due to the complexity of the thermo-mechanics of EM during the SDT, both macro-scale response and sub-grid mesoscale energy localization must be captured accurately. This work proposes an efficient and accurate multiscale framework for SDT simulations of EM. We employ deep learning to model the mesoscale energy localization of shock-initiated EM microstructures upon which prediction results are used to supply reaction progress rate information to the macroscale SDT simulation. The proposed multiscale modeling framework is divided into two stages. First, a physics-aware recurrent convolutional neural network (PARC) is used to model the mesoscale energy localization of shock-initiated heterogeneous EM microstructures. PARC is trained using direct numerical simulations (DNS) of hotspot ignition and growth within microstructures of pressed HMX material subjected to different input shock strengths. After training, PARC is employed to supply hotspot ignition and growth rates for macroscale SDT simulations. We show that PARC can play the role of a surrogate model in a multiscale simulation framework, while drastically reducing the computation cost and providing improved representations of the sub-grid physics. The proposed multiscale modeling approach will provide a new tool for material scientists in designing high-performance and safer energetic materials.
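For intuition only, the sketch below shows a generic recurrent convolutional surrogate that rolls a 2D field forward in time; it is not the published PARC architecture, and the cell design, channel counts, and residual update are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentConvSurrogate(nn.Module):
    """Toy recurrent convolutional surrogate: predicts the next field from the current one."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.encode = nn.Conv2d(channels + hidden, hidden, 3, padding=1)
        self.decode = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, field, steps=10):
        h = torch.zeros(field.size(0), self.hidden, *field.shape[-2:])
        outputs = []
        for _ in range(steps):
            h = torch.tanh(self.encode(torch.cat([field, h], dim=1)))
            field = field + self.decode(h)       # residual update of the field
            outputs.append(field)
        return torch.stack(outputs, dim=1)       # (batch, steps, C, H, W)

model = RecurrentConvSurrogate()
initial_field = torch.rand(2, 1, 64, 64)         # e.g., a shocked microstructure snapshot
rollout = model(initial_field, steps=5)
print(rollout.shape)                             # torch.Size([2, 5, 1, 64, 64])
```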
Successful artificial intelligence systems often require large amounts of labeled data to extract information from document images. In this paper, we investigate the problem of improving the performance of artificial intelligence systems in understanding document images, especially when training data is limited. We address the problem by proposing a novel fine-tuning method based on reinforcement learning. Our approach treats the information extraction model as a policy network and uses policy-gradient training to update the model so as to maximize a combined reward function that complements the traditional cross-entropy loss. Our experiments on four datasets, using both labels and expert feedback, show that the proposed fine-tuning mechanism consistently improves the performance of a state-of-the-art information extractor, especially in the small training-data regime.
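A hedged sketch of the kind of objective described above: a REINFORCE-style policy-gradient term driven by an episode reward, added to the usual cross-entropy loss. The reward definition, the tensor shapes, and the weighting factor are placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, reward, pg_weight=0.5):
    """logits: (batch, num_fields, num_classes); labels: (batch, num_fields); reward: (batch,)."""
    ce = F.cross_entropy(logits.flatten(0, 1), labels.flatten())
    log_probs = F.log_softmax(logits, dim=-1)
    sampled = torch.distributions.Categorical(logits=logits).sample()
    sampled_logp = log_probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)
    # REINFORCE: raise the log-probability of sampled extractions in proportion to the reward.
    pg = -(reward.unsqueeze(-1) * sampled_logp).mean()
    return ce + pg_weight * pg

logits = torch.randn(4, 10, 5, requires_grad=True)   # toy extraction scores
labels = torch.randint(0, 5, (4, 10))
reward = torch.rand(4)                               # e.g., an F1-based episode reward
loss = combined_loss(logits, labels, reward)
loss.backward()
```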
Data-free knowledge distillation (DFKD) has recently attracted attention thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using any training data. The main idea is to use a generator to synthesize data for training the student. As the generator is updated, the distribution of the synthetic data changes. This distribution shift can be large if the generator and the student are trained adversarially, causing the student to forget the knowledge acquired in previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD), which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be considered an ensemble of the generator's old versions and typically changes less per update than the generator, training on its synthetic samples helps the student recall past knowledge and prevents the student from adapting too quickly to the generator's new updates. Our experiments on six benchmark datasets, including ImageNet and Places365, show that MAD outperforms competing methods in handling the large distribution shift problem. Our method also compares favorably with existing DFKD methods and even achieves state-of-the-art results in some cases.
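The sketch below isolates the EMA mechanism at the core of MAD: keep a slowly moving average copy of the generator and train the student on synthetic samples from both copies. The momentum value and the toy generator are assumptions for illustration.

```python
import copy
import torch

@torch.no_grad()
def ema_update(ema_gen, gen, momentum=0.999):
    """Blend the EMA generator's weights toward the current generator's weights."""
    for p_ema, p in zip(ema_gen.parameters(), gen.parameters()):
        p_ema.mul_(momentum).add_(p, alpha=1.0 - momentum)

generator = torch.nn.Sequential(torch.nn.Linear(100, 784), torch.nn.Tanh())
ema_generator = copy.deepcopy(generator)            # EMA copy, updated slowly

z = torch.randn(16, 100)
x_new = generator(z)                                # samples tracking the latest generator
x_old = ema_generator(torch.randn(16, 100))         # samples recalling older generator versions
student_batch = torch.cat([x_new, x_old], dim=0)    # the student trains on both

ema_update(ema_generator, generator)                # called after each generator update
```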
We present a non-asymptotic convergence analysis, based on Rademacher and Vapnik-Chervonenkis bounds, of a two-step approach to learning the conditional value-at-risk (VaR) and expected shortfall (ES). Our approach for the VaR extends to the problem of learning several VaRs at once, corresponding to different quantile levels. This leads to efficient learning schemes based on neural network quantile regression and least-squares regression. An a posteriori (non-nested) Monte Carlo procedure is introduced to estimate the distance to the ground-truth VaR and ES without requiring access to the latter. The approach is illustrated with numerical experiments in a Gaussian toy model and in a financial case study where the objective is to learn a dynamic initial margin.
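A minimal sketch of the two-step idea under simple assumptions: first learn the conditional VaR at level alpha by quantile (pinball-loss) regression, then learn the ES by least-squares regression against the exceedance-adjusted target VaR + (Y - VaR)^+ / (1 - alpha). The network sizes, optimizer settings, and toy data are illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn

def pinball_loss(pred, target, alpha):
    diff = target - pred
    return torch.mean(torch.maximum(alpha * diff, (alpha - 1) * diff))

alpha = 0.99
x = torch.randn(1024, 5)                 # conditioning features
y = torch.randn(1024)                    # future loss samples

# Step 1: quantile regression for the conditional VaR.
var_net = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(var_net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = pinball_loss(var_net(x).squeeze(-1), y, alpha)
    loss.backward()
    opt.step()

# Step 2: least-squares regression of the ES using the frozen VaR estimate.
with torch.no_grad():
    var_hat = var_net(x).squeeze(-1)
es_target = var_hat + torch.clamp(y - var_hat, min=0.0) / (1.0 - alpha)
es_net = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(es_net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((es_net(x).squeeze(-1) - es_target) ** 2)
    loss.backward()
    opt.step()
```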
We propose a data collection and annotation pipeline that extracts information from Vietnamese radiology reports in order to provide accurate labels for chest X-ray (CXR) images. This makes it possible to annotate data that matches country-specific diagnostic categories, which may vary from country to country. To assess the efficacy of the proposed labeling technique, we built a CXR dataset containing 9,752 studies and evaluated our pipeline on a subset of this dataset. With an F1 score of at least 0.9923, the evaluation demonstrates that our labeling tool performs precisely and consistently across all classes. After building the dataset, we train deep learning models that leverage knowledge transferred from large public CXR datasets. We employ various loss functions to overcome the curse of imbalanced multi-label datasets and experiment with various model architectures to select the one delivering the best performance. Our best model (CheXpert-pretrained EfficientNet-B2) achieves an F1 score of 0.6989 (95% CI 0.6740, 0.7240), an AUC of 0.7912, a sensitivity of 0.7064, and a specificity of 0.8760 for the general abnormality diagnosis. Finally, we demonstrate that our coarse classification (based on five specific abnormality locations) achieves results comparable to fine-grained classification (twelve pathologies) on the benchmark CheXpert dataset for general abnormality detection, while delivering better average performance across all classes.
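A hedged sketch of a multi-label CXR classifier in the spirit described above: an EfficientNet-B2 backbone with a multi-label head and a focal-style loss as one example of handling class imbalance. The label count, the choice of focal loss, and the use of randomly initialized (rather than CheXpert-pretrained) weights are assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_FINDINGS = 6                                    # assumed label count for illustration
model = models.efficientnet_b2(weights=None)        # pre-trained weights not assumed here
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_FINDINGS)

def focal_bce(logits, targets, gamma=2.0):
    """Binary cross-entropy per label, down-weighting easy examples (focal-style)."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)                           # probability assigned to the true label
    return ((1 - p_t) ** gamma * bce).mean()

images = torch.randn(2, 3, 260, 260)                # B2's nominal input resolution
targets = torch.randint(0, 2, (2, NUM_FINDINGS)).float()
loss = focal_bce(model(images), targets)
loss.backward()
```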
Meta-learning is an effective method for handling imbalanced and noisy-label learning, but it depends on a validation set containing randomly selected, manually labeled, and balanced samples. The random selection, manual labeling, and balancing of this validation set is not only sub-optimal for meta-learning; it also scales poorly with the number of classes. Hence, recent meta-learning papers have proposed ad-hoc heuristics to automatically build and label this validation set, but these heuristics remain sub-optimal for meta-learning. In this paper, we analyze the meta-learning algorithm and propose new criteria to characterize the utility of the validation set, based on: 1) the informativeness of the set; 2) the class-distribution balance of the set; and 3) the correctness of the set's labels. Furthermore, we propose a new imbalanced noisy-label meta-learning (INOLML) algorithm that automatically builds a validation set by maximizing its utility according to the criteria above. Our method shows significant improvements over previous meta-learning approaches and sets a new state of the art on several benchmarks.
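The rough sketch below is not the INOLML algorithm itself; it only illustrates selecting a balanced validation set using simple proxies for the three criteria: prediction entropy for informativeness, per-class quotas for balance, and low training loss as a crude signal of label correctness.

```python
import numpy as np

def build_validation_set(losses, entropies, labels, per_class=10):
    """Greedily pick `per_class` samples per class, scored by informativeness (entropy)
    minus a penalty for likely-noisy labels (high loss)."""
    scores = entropies - losses
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        best = idx[np.argsort(-scores[idx])[:per_class]]   # top-scoring samples of class c
        selected.extend(best.tolist())
    return np.array(selected)

rng = np.random.default_rng(0)
losses = rng.random(1000)           # per-sample training loss
entropies = rng.random(1000)        # per-sample prediction entropy
labels = rng.integers(0, 10, 1000)  # (possibly noisy) labels
val_idx = build_validation_set(losses, entropies, labels)
print(val_idx.shape)                # balanced: 10 classes x 10 samples
```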